Investigating Algorithm Review Boards for Organizational Responsible Artificial Intelligence Governance

Hadley, Emily, Blatecky, Alan, Comfort, Megan

arXiv.org Artificial Intelligence

Organizations including companies, nonprofits, governments, and academic institutions are increasingly developing, deploying, and utilizing artificial intelligence (AI) tools. Responsible AI (RAI) governance approaches at organizations have emerged as important mechanisms to address potential AI risks and harms. In this work, we interviewed 17 technical contributors across organization types (Academic, Government, Industry, Nonprofit) and sectors (Finance, Health, Tech, Other) about their experiences with internal RAI governance. Our findings illuminated the variety of organizational definitions of RAI and accompanying internal governance approaches. We summarized the first detailed findings on algorithm review boards (ARBs) and similar review committees in practice, including their membership, scope, and measures of success. We confirmed known robust model governance in finance sectors and revealed extensive algorithm and AI governance with ARB-like review boards in health sectors. Our findings contradict the idea that Institutional Review Boards alone are sufficient for algorithm governance and posit that ARBs are among the more impactful internal RAI governance approaches. Our results suggest that integration with existing internal regulatory approaches and leadership buy-in are among the most important attributes for success and that financial tensions are the greatest challenge to effective organizational RAI. We make a variety of suggestions for how organizational partners can learn from these findings when building their own internal RAI frameworks. We outline future directions for developing and measuring effectiveness of ARBs and other internal RAI governance approaches.


13 Best Code Review Tools for Developers (2023 Edition)

#artificialintelligence

Code review is a part of the software development process in which source code is examined to identify bugs and defects at an early stage. A code review is typically conducted before new code is merged into the codebase. An effective code review prevents bugs and errors from getting into your project by improving code quality at an early stage of the software development process. In this post, we'll explain what code review is and explore popular code review tools that help organizations with the code review process. The primary goal of the code review process is to assess any new code for bugs, errors, and conformance to the quality standards set by the organization. Code review should not consist of one-sided feedback; because it works best as a two-way conversation, an intangible benefit of the process is the collective team's improved coding skills. If you would like to initiate a code review process in your organization, you should first decide who will review the code. On a small team, you may assign team leads to review all code, as sketched below.
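When reviewer assignment is formalized, it often reduces to a lookup from changed file paths to owners. Here is a minimal Python sketch of that idea; the ownership map, the reviewer names, and the first-match-wins rule are illustrative assumptions, not the behavior of any particular tool.

```python
# Minimal sketch: route a change set to reviewers based on which
# (hypothetical) paths they own. First matching pattern wins per file.
from fnmatch import fnmatch

# Hypothetical ownership map: glob pattern -> responsible reviewers.
OWNERS = {
    "src/api/*": ["lead-backend"],
    "src/ui/*": ["lead-frontend"],
    "*": ["team-lead"],  # fallback: the team lead reviews everything else
}

def assign_reviewers(changed_files):
    """Return a sorted list of reviewers whose owned paths match the change."""
    reviewers = set()
    for path in changed_files:
        for pattern, owners in OWNERS.items():
            if fnmatch(path, pattern):
                reviewers.update(owners)
                break  # stop at the first pattern that matches this file
    return sorted(reviewers)

print(assign_reviewers(["src/api/auth.py", "docs/readme.md"]))
# -> ['lead-backend', 'team-lead']
```

Platform features such as GitHub's CODEOWNERS file implement a similar path-to-owner mapping natively, so on hosted repositories this lookup is usually configuration rather than code.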


5 risks of AI and machine learning that modelops remediates

#artificialintelligence

Let's say your company's data science teams have documented business goals for areas where analytics and machine learning models can deliver business impact. Now they are ready to start. They've tagged data sets, selected machine learning technologies, and established a process for developing machine learning models. They have access to scalable cloud infrastructure. Is that sufficient to give the team the green light to develop machine learning models and deploy the successful ones to production?
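The article's framing suggests the answer is no: risks remain that modelops practices remediate, typically as governance checks on the path to production. As a minimal Python sketch of such a pre-deployment "green light" gate (the specific checks and the 0.80 AUC threshold are illustrative assumptions, not a standard):

```python
# Minimal sketch of a pre-deployment gate: every governance check must
# pass before a model candidate is promoted. The checks and threshold
# are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ModelCandidate:
    name: str
    validation_auc: float
    data_sheet_complete: bool    # training data tagged and documented
    bias_audit_passed: bool
    monitoring_configured: bool  # drift and performance alerts wired up

def green_light(model: ModelCandidate, min_auc: float = 0.80) -> bool:
    """Return True only if every governance check passes."""
    return all([
        model.validation_auc >= min_auc,
        model.data_sheet_complete,
        model.bias_audit_passed,
        model.monitoring_configured,
    ])

candidate = ModelCandidate("churn-v2", 0.87, True, True, False)
print(green_light(candidate))  # False: monitoring is missing, so no deploy
```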


Rob Reich: AI developers need a code of responsible conduct

#artificialintelligence

Rob Reich wears many hats: political philosopher, director of the McCoy Family Center for Ethics in Society, and associate director of the Stanford Institute for Human-Centered Artificial Intelligence. In recent years, Reich has delved deeply into the ethical and political issues posed by revolutionary technological advances in artificial intelligence (AI). His work is not always easy for technologists to hear. In his book, System Error: Where Big Tech Went Wrong and How We Can Reboot, Reich and his co-authors (computer scientist Mehran Sahami and social scientist Jeremy M. Weinstein) argued that tech companies and developers are so fixated on "optimization" that they often trample on human values.


Why foundation models in AI need to be released responsibly

#artificialintelligence

Percy Liang is director of the Center for Research on Foundation Models, a faculty affiliate at the Stanford Institute for Human-Centered AI and an associate professor of Computer Science at Stanford University. Humans are not very good at forecasting the future, especially when it comes to technology. Foundation models are a new class of large-scale neural networks with the ability to generate text, audio, video and images. These models will anchor all kinds of applications and hold the power to influence many aspects of society. It's difficult for anyone, even experts, to imagine where this technology will lead in the coming years.


The Time Is Now to Develop Community Norms for the Release of Foundation Models

#artificialintelligence

As foundation models (e.g., GPT-3, PaLM, DALL-E 2) become more powerful and ubiquitous, the issue of responsible release becomes critically important. In this blog post, we use the term release to mean research access: foundation model developers making assets such as data, code, and models accessible to external researchers. Deploying to users for testing and collecting feedback (Ouyang et al. 2022; Scheurer et al. 2022; AI Test Kitchen) and deploying to end users in products (Schwartz et al. 2022) are other forms of release that are out of scope for this blog post. Foundation model developers presently take divergent positions on the topic of release and research access. For example, EleutherAI, Meta, and the BigScience project led by Hugging Face embrace broadly open release (see EleutherAI's statement and Meta's recent release). In contrast, OpenAI advocates for a staged release and currently provides the general public with only API access; Microsoft also provides API access, but to a restricted set of academic researchers.
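These divergent positions can be made concrete as access tiers that expose different sets of assets to external researchers. The Python sketch below illustrates that spectrum; the tier names, asset sets, and vetting rule are assumptions made for exposition, not any organization's actual policy.

```python
# Illustrative sketch of release tiers for foundation model assets.
# Tier names and asset sets are hypothetical, chosen to mirror the
# spectrum described in the post (broadly open through API-only).
RELEASE_TIERS = {
    "open": {"weights", "code", "data", "api"},  # broadly open release
    "gated_research": {"weights", "api"},        # granted after vetting
    "staged_api": {"api"},                       # API access only
    "closed": set(),
}

def assets_available(tier: str, vetted_researcher: bool = False) -> set:
    """Return the assets a requester can obtain under a given tier."""
    if tier == "gated_research" and not vetted_researcher:
        return set()  # research access is granted only after review
    return RELEASE_TIERS[tier]

print(assets_available("staged_api"))  # {'api'}
print(assets_available("gated_research", vetted_researcher=True))
# {'weights', 'api'} (set ordering may vary)
```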


The art of artificial intelligence: a recent copyright law development

#artificialintelligence

April 22, 2022 - Over the past several years, comedy writer Keaton Patti has popularized "bot scripts," in which he parodically imagines how a computer might synthesize 1,000 or more hours of information and then create its own imitative work. My personal favorite was a holiday-themed romantic comedy script, in which a "business man," whose "hands are briefcases," courts a "single mother," who "cannot date because of a snow curse."


NATO ups the ante on disruptive tech, artificial intelligence

#artificialintelligence

NATO has officially kicked off two new efforts meant to help the alliance invest in critical next-generation technologies and avoid capability gaps between its member nations. For months, officials have laid the groundwork to launch the Defence Innovation Accelerator for the North Atlantic -- nicknamed DIANA -- and establish an innovation fund to support private companies developing dual-use technologies. Both of those measures were formally agreed upon during NATO's meeting of defense ministers last month in Brussels, said Secretary-General Jens Stoltenberg. Allies signed the agreement to establish the NATO Innovation Fund and launch DIANA on Oct. 22, the final day of the two-day conference, Stoltenberg said in a media briefing that day. He expects the fund to invest €1 billion (U.S. $1.16 billion) in companies and academic partners working on emerging and disruptive technologies.


Google 'betrays patient trust' with DeepMind Health move

The Guardian

Google has been accused of breaking promises to patients, after the company announced it would be moving a healthcare-focused subsidiary, DeepMind Health, into the main arm of the organisation. The restructure, critics argue, breaks a pledge DeepMind made when it started working with the NHS that "data will never be connected to Google accounts or services". The change has also resulted in the dismantling of an independent review board, created to oversee the company's work with the healthcare sector, with Google arguing that the board was too focused on Britain to provide effective oversight for a newly global body. Google says the restructure is necessary to allow DeepMind's flagship health app, Streams, to scale up globally. The app, which was created to help doctors and nurses monitor patients for acute kidney injury (AKI), has since grown to offer a full digital dashboard for patient records.


Unrestricted Adversarial Examples

Brown, Tom B., Carlini, Nicholas, Zhang, Chiyuan, Olsson, Catherine, Christiano, Paul, Goodfellow, Ian

arXiv.org Machine Learning

We introduce a two-player contest for evaluating the safety and robustness of machine learning systems, with a large prize pool. Unlike most prior work in ML robustness, which studies norm-constrained adversaries, we shift our focus to unconstrained adversaries. Defenders submit machine learning models and try to achieve high accuracy and coverage on non-adversarial data while making no confident mistakes on adversarial inputs. Attackers try to subvert defenses by finding arbitrary unambiguous inputs where the model assigns an incorrect label with high confidence. We propose a simple unambiguous dataset ("bird-or-bicycle") to use as part of this contest. We hope this contest will help to more comprehensively evaluate the worst-case adversarial risk of machine learning models.
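The defender's objective above has a compact computational form: answer only when confident, and count a "confident mistake" as a wrong prediction made above the confidence threshold. Below is a minimal NumPy sketch of that scoring; the 0.9 threshold and the toy arrays are illustrative assumptions, not the contest's actual parameters.

```python
# Minimal sketch of the contest's defender-side scoring: coverage on
# answered inputs, plus a count of confident mistakes. The threshold
# and data are illustrative assumptions.
import numpy as np

def evaluate(probs, labels, threshold=0.9):
    """probs: (n, 2) bird-or-bicycle class probabilities; labels: (n,)."""
    confidence = probs.max(axis=1)
    predictions = probs.argmax(axis=1)
    answered = confidence >= threshold  # below threshold, the model abstains
    coverage = answered.mean()          # fraction of inputs answered
    confident_mistakes = int(np.sum(answered & (predictions != labels)))
    return coverage, confident_mistakes

probs = np.array([[0.97, 0.03],   # confident "bird"
                  [0.55, 0.45],   # unsure -> abstains
                  [0.05, 0.95]])  # confident "bicycle"
labels = np.array([0, 0, 0])      # suppose every image is a bird
print(evaluate(probs, labels))    # (0.666..., 1): one confident mistake
```

A winning defense would keep coverage high on clean data while driving the confident-mistake count to zero even on attacker-chosen inputs.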